Robot flocking has received considerable attention over the past two decades. In this paper, we present a constraint-driven control algorithm that minimizes the energy consumption of individual agents and yields an emergent V-formation. Because the formation emerges from decentralized interactions between agents, our approach is robust to the spontaneous addition or removal of agents from the system. First, we present an analytical model of the wake behind a fixed-wing UAV and derive the optimal airspeed at which a trailing UAV maximizes its travel endurance. Next, we show that simply flying at the optimal airspeed never leads to emergent flocking behavior, and we propose a new decentralized "Anseroid" behavior that yields emergent V-formations. We encode these behaviors in a constraint-driven control algorithm that minimizes the locomotive power of each UAV. Finally, we prove that UAVs initialized in an approximate V or echelon formation converge under our proposed control law, and we demonstrate that this emergence occurs in real time in simulation and in experiments with a fleet of Crazyflie quadrotors.
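As a rough illustration of the endurance argument (not the paper's wake model), a fixed-wing power curve of the standard form P(v) = c_par·v³ + c_ind/v has a unique endurance-optimal airspeed where dP/dv = 0. The coefficients below are placeholders chosen only to make the sketch run:

```python
# Minimal sketch, assuming a generic parasitic + induced power model
#   P(v) = c_par * v**3 + c_ind / v   (coefficients are illustrative, not the paper's).
import numpy as np

def power_required(v, c_par=0.005, c_ind=600.0):
    """Locomotive power [W] at airspeed v [m/s] (toy coefficients)."""
    return c_par * v**3 + c_ind / v

def endurance_optimal_airspeed(c_par=0.005, c_ind=600.0):
    """Endurance is maximized where power is minimized: dP/dv = 0."""
    # 3*c_par*v^2 - c_ind/v^2 = 0  =>  v* = (c_ind / (3*c_par))**0.25
    return (c_ind / (3.0 * c_par)) ** 0.25

if __name__ == "__main__":
    v_star = endurance_optimal_airspeed()
    vs = np.linspace(5.0, 40.0, 400)
    v_num = vs[np.argmin(power_required(vs))]
    print(f"analytic v* = {v_star:.2f} m/s, numeric v* = {v_num:.2f} m/s")
```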
Control of swarm systems is relatively well understood for simple robotic platforms at the macro scale. However, several open questions remain as to whether similar results can be achieved with microrobots. This paper presents a modeling framework based on a dynamic model of magnetized, self-propelled Janus microrobots under a global magnetic field. We experimentally validate our model and provide methods that accurately describe the behavior of the microrobots while modeling their simultaneous control. The model generalizes broadly to other microrobotic platforms in low-Reynolds-number environments.
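A commonly assumed form for this kind of system, sketched below for intuition only (the paper's validated model and parameters are not reproduced here), is overdamped planar motion: at low Reynolds number inertia is negligible, the particle self-propels along its heading, and a global field applies a torque that relaxes the heading toward the field direction.

```python
# Illustrative sketch (assumed model form, toy parameters): overdamped dynamics
# of a self-propelled magnetic Janus particle steered by a global field.
import numpy as np

def step(state, field_angle, dt, v0=10e-6, k_mag=5.0):
    """One Euler step. state = (x, y, theta); k_mag is a magnetic alignment rate [1/s]."""
    x, y, theta = state
    # Self-propulsion along the body heading (speed v0 [m/s]).
    x += v0 * np.cos(theta) * dt
    y += v0 * np.sin(theta) * dt
    # Magnetic torque relaxes the heading toward the applied field direction.
    theta += k_mag * np.sin(field_angle - theta) * dt
    return np.array([x, y, theta])

state = np.array([0.0, 0.0, 0.0])
for _ in np.arange(0.0, 2.0, 1e-3):        # steer toward +y for 2 s
    state = step(state, field_angle=np.pi / 2, dt=1e-3)
print(state)                               # heading ends near pi/2
```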
Platooning has been exploited as a method for vehicles to minimize energy consumption. In this paper, we present a constraint-driven optimal control framework that yields emergent platooning behavior for connected and automated vehicles operating in an open transportation system. Our approach combines recent insights in constraint-driven optimal control with the physical aerodynamic interactions between vehicles in a highway setting. The result is a set of descriptive conditions for when platooning is an appropriate strategy, together with an optimal control law that yields emergent platooning behavior. Finally, we demonstrate these properties in simulation.
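The underlying trade-off can be illustrated with a back-of-the-envelope check (illustrative numbers, not the paper's conditions): platooning pays off when the aerodynamic drag energy saved over the shared stretch of highway exceeds the extra energy spent maneuvering to join the platoon.

```python
# Toy comparison, assuming a quadratic drag model and a fixed drag-reduction
# factor while drafting; all figures are placeholders.
def drag_energy(distance_m, speed_mps, cd=0.30, area_m2=2.2, rho=1.2):
    """Aerodynamic energy [J] = 0.5 * rho * Cd * A * v^2 * distance."""
    return 0.5 * rho * cd * area_m2 * speed_mps**2 * distance_m

def platooning_is_beneficial(shared_km, speed_mps, drag_reduction, join_cost_j):
    saved = drag_reduction * drag_energy(shared_km * 1000.0, speed_mps)
    return saved > join_cost_j, saved

ok, saved = platooning_is_beneficial(shared_km=20.0, speed_mps=30.0,
                                     drag_reduction=0.25, join_cost_j=1.5e5)
print(ok, f"{saved/1e6:.2f} MJ saved")
```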
As we move toward increasingly complex cyber-physical systems (CPS), new methods are needed to plan efficient state trajectories in real time. In this paper, we propose an approach that significantly reduces the complexity of solving optimal control problems for a class of CPS. We exploit the property of differential flatness to simplify the Euler-Lagrange equations; this simplification eliminates the numerical instabilities that arise in the general case. We also present an explicit differential equation that describes the evolution of the optimal state trajectory, and we extend our results to both the unconstrained and constrained cases. Furthermore, we demonstrate the performance of our approach by generating optimal trajectories for a double-integrator agent in an environment with obstacles. In simulation, our approach achieves a 30% cost reduction and an almost 3x improvement in computation speed compared to an existing numerical optimal control library.
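For the double-integrator case, the standard flat-output result (shown below as a self-contained sketch, not the paper's full obstacle-aware method) is that minimizing the integral of squared control effort makes the optimal position trajectory a cubic polynomial in time, so an unconstrained problem reduces to a small linear solve rather than a numerical optimization.

```python
# Sketch: for x'' = u with cost \int u^2 dt, the Euler-Lagrange conditions make
# the optimal flat output x(t) a cubic polynomial; boundary conditions fix it.
import numpy as np

def min_energy_cubic(x0, v0, xT, vT, T):
    """Coefficients [a0, a1, a2, a3] of x(t) = a0 + a1 t + a2 t^2 + a3 t^3."""
    A = np.array([
        [1.0, 0.0, 0.0,    0.0],      # x(0)  = x0
        [0.0, 1.0, 0.0,    0.0],      # x'(0) = v0
        [1.0, T,   T**2,   T**3],     # x(T)  = xT
        [0.0, 1.0, 2.0*T,  3.0*T**2], # x'(T) = vT
    ])
    return np.linalg.solve(A, np.array([x0, v0, xT, vT]))

coeffs = min_energy_cubic(x0=0.0, v0=0.0, xT=5.0, vT=0.0, T=2.0)
t = np.linspace(0.0, 2.0, 5)
x = np.polyval(coeffs[::-1], t)        # evaluate the flat output
print(np.round(x, 3))
```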
The Me 163 was a Second World War fighter airplane and a result of the German air force's secret developments. One of these airplanes is currently owned and displayed in the historic aircraft exhibition of the Deutsches Museum in Munich, Germany. To gain insights with respect to its history, design, and state of preservation, a complete CT scan was obtained using an industrial XXL computed tomography scanner. Using the CT data from the Me 163, all its details can be visually examined at various levels, ranging from the complete hull down to single sprockets and rivets. However, while a trained human observer can identify and interpret the volumetric data with all its parts and connections, a virtual dissection of the airplane into all its different parts would be quite desirable. This requires an instance segmentation of all components and objects of interest into disjoint entities from the CT data. Since no adequate computer-assisted tools for automated or semi-automated segmentation of such XXL airplane data are currently available, an interactive data annotation and object labeling process has been established as a first step. So far, seven 512 x 512 x 512 voxel sub-volumes from the Me 163 airplane have been annotated and labeled; the results can potentially be used for various new applications in the fields of digital heritage, non-destructive testing, and machine learning. This work describes the data acquisition process of the airplane using an industrial XXL-CT scanner, outlines the interactive segmentation and labeling scheme used to annotate sub-volumes of the airplane's CT data, and discusses various challenges with respect to interpreting and handling the annotated and labeled data.
Statistical analysis and modeling is becoming increasingly popular among the world's leading organizations, especially professional NBA teams. Sophisticated methods and models of sports talent evaluation have been created for this purpose. In this research, we present a different perspective from the dominant tactic of statistical data analysis. Inspired by a strategy that NBA teams have followed in the past, hiring human professionals, we deploy image analysis and Convolutional Neural Networks in an attempt to predict the career trajectory of newly drafted players from each draft class. We created a database consisting of about 1500 images of players from every draft since 1990. We then divided the players into five quality classes based on their expected NBA career. Next, we trained popular pre-trained image classification models on our data and conducted a series of tests in an attempt to create models that give reliable predictions of the rookie players' careers. The results of this study suggest a potential correlation between facial characteristics and athletic talent that is worthy of further investigation.
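A typical transfer-learning setup of this kind looks like the sketch below. The backbone, freezing strategy, and hyperparameters are assumptions for illustration, not the authors' exact configuration; only the five-class career labeling comes from the abstract.

```python
# Sketch: re-head a pre-trained image classifier for the five career-quality classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # the five expected-career quality classes

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pre-trained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step on a batch of player portraits."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```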
This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real-case scenario for stroke rehabilitation at the Guttmann Institute (a neurorehabilitation hospital), showing promising results. Our work could have a great impact on remote healthcare applications, improving medical efficiency and reducing healthcare costs. Future steps include further clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitor other neurodegenerative diseases.
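At the core of the Kinematic Theory of Rapid Human Movement is the lognormal velocity profile of a single stroke. The sketch below computes that profile with illustrative parameter values only; the project's actual feature extraction from smartwatch data is not reproduced here.

```python
# Sketch of a single-stroke lognormal speed profile (Sigma-Lognormal model form).
import numpy as np

def lognormal_speed(t, D=1.0, t0=0.0, mu=-1.5, sigma=0.4):
    """Speed profile of one stroke: D scales amplitude, t0 is the onset time,
    (mu, sigma) characterize the neuromuscular response. Zero for t <= t0."""
    t = np.asarray(t, dtype=float)
    v = np.zeros_like(t)
    m = t > t0
    dt = t[m] - t0
    v[m] = D / (sigma * np.sqrt(2 * np.pi) * dt) * \
           np.exp(-(np.log(dt) - mu) ** 2 / (2 * sigma ** 2))
    return v

t = np.linspace(0.0, 1.0, 200)
v = lognormal_speed(t)
print(f"peak speed {v.max():.2f} at t = {t[np.argmax(v)]:.3f} s")
```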
Our goal is to reconstruct tomographic images from few measurements with a low signal-to-noise ratio. In clinical imaging, this helps to improve patient comfort and reduce radiation exposure. As quantum computing advances, we propose to use an adiabatic quantum computer and associated hybrid methods to solve the reconstruction problem. Tomographic reconstruction is an ill-posed inverse problem. We test our reconstruction technique with respect to image size, noise content, and underdetermination of the measured projection data. We then present reconstructed binary and integer-valued images of up to 32 by 32 pixels. The demonstrated method competes with traditional reconstruction algorithms and is superior in terms of robustness to noise and reconstruction from few projections. We postulate that hybrid quantum computing will soon reach maturity for real applications in tomographic reconstruction. Finally, we point out the current limitations regarding problem size and interpretability of the algorithm.
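One common way to pose binary reconstruction for an annealing-style quantum computer is as a QUBO: minimize ||Ax - b||² over binary pixels x, which (using x_i² = x_i) folds into a single quadratic matrix Q. The sketch below builds such a QUBO for a toy 2x2 image and solves it by brute force; the hybrid solver and problem sizes described in the abstract are not reproduced here.

```python
# Toy QUBO formulation of binary tomographic reconstruction from row/column sums.
import itertools
import numpy as np

# Projection matrix A for a 2x2 binary image x = [x00, x01, x10, x11].
A = np.array([[1, 1, 0, 0],    # row 0 sum
              [0, 0, 1, 1],    # row 1 sum
              [1, 0, 1, 0],    # column 0 sum
              [0, 1, 0, 1]])   # column 1 sum
x_true = np.array([1, 1, 0, 1])
b = A @ x_true                 # measured projections

# ||Ax - b||^2 = x^T (A^T A) x - 2 (A^T b)^T x + const, and x_i^2 = x_i for binaries,
# so the linear term folds onto the diagonal of the QUBO matrix Q.
Q = A.T @ A - 2 * np.diag(A.T @ b)

best = min(itertools.product([0, 1], repeat=4),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("recovered image:", best)   # matches x_true for this non-degenerate toy case
```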
Development funds are essential to finance climate change adaptation and are thus an important part of international climate policy. However, the absence of a common reporting practice makes it difficult to assess the amount and distribution of such funds. Research has questioned the credibility of reported figures, indicating that adaptation financing is in fact lower than published figures suggest. Projects claiming a greater relevance to climate change adaptation than they actually target are referred to as "overreported". To estimate realistic rates of overreporting in large data sets over time, we propose an approach based on state-of-the-art text classification. To date, assessments of credibility have relied on small, manually evaluated samples. We use such a sample data set to train a classifier with an accuracy of $89.81\% \pm 0.83\%$ (tenfold cross-validation) and extrapolate to larger data sets to identify overreporting. Additionally, we propose a method that incorporates evidence from smaller, higher-quality data to correct predicted rates using Bayes' theorem. This enables a comparison of different annotation schemes to estimate the degree of overreporting in climate change adaptation. Our results support findings that indicate extensive overreporting of $32.03\%$ with a credible interval of $[19.81\%; 48.34\%]$.
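One standard way to correct a classifier's raw predicted rate with evidence from a smaller, higher-quality sample is the misclassification (Rogan-Gladen) adjustment sketched below; the paper's exact Bayesian procedure may differ, and all numbers here are illustrative.

```python
# Sketch: adjust an apparent (classifier-predicted) overreporting rate using
# sensitivity/specificity measured on a small, manually evaluated sample.
def corrected_rate(apparent_rate, sensitivity, specificity):
    """Estimate the true rate from the apparent rate via the Rogan-Gladen correction."""
    return (apparent_rate + specificity - 1.0) / (sensitivity + specificity - 1.0)

apparent = 0.30                 # fraction of projects the classifier flags (illustrative)
sens, spec = 0.92, 0.90         # measured on the high-quality sample (illustrative)
print(f"corrected overreporting rate: {corrected_rate(apparent, sens, spec):.3f}")
```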
Computer-aided systems in histopathology are often challenged by various sources of domain shift that considerably impact the performance of these algorithms. We investigated the potential of using self-supervised pre-training to overcome scanner-induced domain shifts for the downstream task of tumor segmentation. For this, we present the Barlow Triplets to learn scanner-invariant representations from a multi-scanner dataset with local image correspondences. We show that self-supervised pre-training successfully aligned different scanner representations, which, interestingly, only resulted in a limited benefit for our downstream task. We thereby provide insights into the influence of scanner characteristics on downstream applications and contribute to a better understanding of why established self-supervised methods have not yet shown the same success on histopathology data as they have for natural images.
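For orientation, the sketch below shows the Barlow Twins-style redundancy-reduction loss that a "Barlow Triplets" objective builds on, applied here to embeddings of corresponding patches from two scanners; the extension to a third scanner view and the authors' exact formulation are not reproduced.

```python
# Sketch of a Barlow Twins-style cross-correlation loss between two scanner views.
import torch

def barlow_loss(z_a, z_b, lambd=5e-3, eps=1e-6):
    """z_a, z_b: (batch, dim) embeddings of the same tissue region from two scanners."""
    n, d = z_a.shape
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + eps)   # per-dimension normalization
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + eps)
    c = (z_a.T @ z_b) / n                            # d x d cross-correlation matrix
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()   # pull matching dims to correlation 1
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()  # decorrelate the rest
    return on_diag + lambd * off_diag

z_a, z_b = torch.randn(64, 128), torch.randn(64, 128)
print(barlow_loss(z_a, z_b).item())
```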